Directory structure
Hadoop cluster (CDH4) practice (0) Preface
Hadoop cluster (CDH4) practice (1) Hadoop (HDFS) build
Hadoop cluster (CDH4) practice (2) HBase & ZooKeeper build
Hadoop cluster (CDH4) practice (3) Hive build
Hadoop cluster (CDH4) practice (4) Oozie build
Hadoop cluster (CDH4) practice (0) Preface
During my testing, a successful startup means the master machine is running the namenode, secondarynamenode, and jobtracker processes, and each slave machine is running the datanode and tasktracker processes.
To stop the cluster, run $HADOOP_HOME/bin/stop-all.sh
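As a quick sanity check, a minimal sketch for starting the cluster and verifying the expected daemons with jps (assuming a Hadoop 1.x/CDH4 layout where start-all.sh sits next to stop-all.sh; slave1 is a hypothetical slave hostname):

$HADOOP_HOME/bin/start-all.sh   # start the HDFS and MapReduce daemons
jps                             # on the master: expect NameNode, SecondaryNameNode, JobTracker
ssh slave1 jps                  # on each slave: expect DataNode, TaskTracker
$HADOOP_HOME/bin/stop-all.sh    # stop everything again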
Hadoop query interfaces
http://<master machine IP address>:50070/dfshealth.jsp
http://<master machine IP address>:50030/jobtracker.jsp
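To confirm from a shell that both UIs are reachable, a quick hedged check (replace master with the master machine's IP or hostname):

curl -s -o /dev/null -w "%{http_code}\n" http://master:50070/dfshealth.jsp   # HDFS health page, expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://master:50030/jobtracker.jsp  # JobTracker page, expect 200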
Hadoop common commands
hadoop dfs -ls lists the contents of an HDFS directory, for example everything under /usr/.
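A few more everyday commands in the same style (the /usr/demo path is illustrative, not from the original):

hadoop dfs -ls /                     # list the HDFS root
hadoop dfs -mkdir /usr/demo          # create a directory
hadoop dfs -mv /usr/demo /usr/demo2  # rename it
hadoop dfs -rmr /usr/demo2           # remove it recursively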
• Ability to master HBase enterprise-level development and management
• Ability to master Pig enterprise-level development and management
• Ability to master Hive enterprise-level development and management
• Ability to use Sqoop to move data freely between traditional relational databases and HDFS
• Ability to collect and manage distributed logs using Flume
• Ability to master the entire process of analysis, development, and deployment of
This is an original article; when reposting, please credit http://blog.csdn.net/lsttoy/article/details/53406710.
Step 1: Download the latest Hive. Go straight to the Apache site and grab the Hive 2.1.0 download.
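For example (a hedged sketch; the URL below is the Apache archive path for this release, which may differ from the mirror you actually use):

wget https://archive.apache.org/dist/hive/hive-2.1.0/apache-hive-2.1.0-bin.tar.gz   # fetch the Hive 2.1.0 binary tarball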
Step 2: Unpack it on the server:

tar zxvf apache-hive-2.1.0-bin.tar.gz
mv apache-hive-2.1.0-bin /home/hive
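A typical next step (my assumption; the original does not spell it out) is to point HIVE_HOME at the unpacked directory and put its bin directory on the PATH:

export HIVE_HOME=/home/hive
export PATH=$PATH:$HIVE_HOME/bin
hive --version    # verify the installation is on the PATH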
Chapter 1: Introduction

Recently the telecommunications group held a big data technology training class. As required, and being a Hadoop novice, I compared HBase and Hive and recorded a hands-on exercise.

Similarities between the two: 1. Both HBase and Hive are architected on top of Hadoop, and both use HDFS as their underlying storage.
In the example of importing data from another table, we created a new table, score1, and inserted data into it with an SQL statement. The steps above are simply listed again here.
Inserting data
INSERT INTO TABLE score1 PARTITION (openingtime=201509) VALUES (1,'a'), (2,'a');
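For context, a minimal sketch of how such a partitioned table might be declared and then read back (the column names id and name are my assumption; the original does not show the DDL):

hive -e "CREATE TABLE IF NOT EXISTS score1 (id INT, name STRING) PARTITIONED BY (openingtime INT);"  # hypothetical DDL
hive -e "SELECT * FROM score1 WHERE openingtime=201509;"                                             # read back the inserted rows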
--------------------------------------------------------------------
That completes this chapter.
Sample (mock) data file download
GitHub: https://github.com/sinodzh/HadoopExample/t
A Linux command for counting the size of every subdirectory, plus the total, under a directory:
du -h --max-depth=1 /home/crazyant/
This counts the sizes under the crazyant directory. Since I only want the size of each top-level subdirectory, I add --max-depth=1; without that option, du recursively lists the size of every subdirectory at every depth.
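A common follow-up (my addition, not in the original) is to sort the output so the largest directories come first:

du -h --max-depth=1 /home/crazyant/ | sort -hr   # largest entries at the top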
Use of the scp command:
Copy from local to remote: scp -r logs_jx pss@crazyant.net:/home/pss/logs

Hive commands
hive shell
Kill a running Hadoop job:
hadoop job -kill job_201403041453_58315
12. hive-wui path: http://172.17.41.38/jobtracker.jsp
13. Deleting a partition:
ALTER TABLE tmp_h02_click_log_baitiao DROP PARTITION (dt='2014-03-01');
ALTER TABLE d_h02_click_log_basic_d_fact DROP PARTITION (dt='2014-01-17');
14. hive command-line operations: execute a query, show the MapReduce progress on the terminal, and output the query result once it finishes.
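As an illustration of item 14, a hedged one-liner (the table name is borrowed from the partition examples above):

hive -e "SELECT COUNT(*) FROM tmp_h02_click_log_baitiao;"   # MapReduce progress prints to the terminal, then the count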
Hive command-line common commands

Loading data:
LOAD DATA LOCAL INPATH '/home/ivr_csr_menu_map.txt' INTO TABLE ivr_csr_menu_map;

Loading into a partition:
LOAD DATA LOCAL INPATH '/home/lftest/lf1.txt' OVERWRITE INTO TABLE lf_test PARTITION (dt=20150927);

Adding OVERWRITE replaces any existing data; without it the new data is appended alongside the original, and an extra copy file is generated.
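To verify what landed where, a small sketch (assuming the lf_test table loaded above exists):

hive -e "SHOW PARTITIONS lf_test;"                         # list the table's partitions
hive -e "SELECT COUNT(*) FROM lf_test WHERE dt=20150927;"  # row count in the freshly loaded partition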
hadoop fs -cat '/ci_cuser_20141231141853691/*' > ci_cusere_20141231141853691.csv
echo $?

~/.bash_profile: each user can use this file to add shell settings dedicated to their own session; it is executed only once, when the user logs in. By default it sets some environment variables and then executes the user's .bashrc file.

Parameterized, the export looks like:
hadoop fs -cat "$1$2/*" > $3.csv
mv $3.csv /home/ocdc/coc

And invoked from Java:
String command = "cd " + ciftpinfo.getFtpPath() + " " + hadoopPath + "hadoop fs -cat '/user
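Wrapped up as a complete script, a minimal sketch (the script name and usage line are my assumptions; the commands mirror the fragment above):

#!/bin/bash
# export_csv.sh -- hypothetical wrapper around the snippet above
# usage: ./export_csv.sh <hdfs_parent_path> <table_dir> <output_name>
hadoop fs -cat "$1$2/*" > "$3.csv"   # concatenate all HDFS part files into one CSV
echo $?                              # report the exit status, as in the original snippet
mv "$3.csv" /home/ocdc/coc           # move the result to the drop directory used above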
Preface: Well, it's a bit more comfortable not having to write code, but we can't slack off. The files Hive operates on need to be loaded from here. Similar to Linux commands, the command line begins with hadoop fs followed by a dash option:

hadoop fs -ls /             # list a file or directory
hadoop fs -cat ./hello.txt  # print a file's contents
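Putting those together, a hedged round trip (hello.txt and the /tmp target path are illustrative):

echo "hello hive" > hello.txt    # create a small local sample file
hadoop fs -put hello.txt /tmp/   # upload it to HDFS
hadoop fs -ls /tmp/hello.txt     # confirm it arrived
hadoop fs -cat /tmp/hello.txt    # read it back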